Information-Theoretic Determination of Minimax Rates of Convergence

Authors

  • Yuhong Yang
  • Andrew Barron
Abstract

In this paper, we present some general results determining minimax bounds on statistical risk for density estimation based on certain information-theoretic considerations. These bounds depend only on metric entropy conditions and are used to identify the minimax rates of convergence.
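For orientation, the paper's central mechanism can be sketched as follows; this is a standard statement of the entropy-rate balance, paraphrased for context rather than quoted from the paper. Let $N(\epsilon)$ denote the $\epsilon$-covering number of the density class under Hellinger (or $L_2$) distance. The minimax risk behaves like $\epsilon_n^2$, where $\epsilon_n$ balances metric entropy against sample size:

    \log N(\epsilon_n) \asymp n\,\epsilon_n^2 .

For a smoothness class with $\log N(\epsilon) \asymp \epsilon^{-1/\alpha}$, solving this balance gives the familiar rate

    \epsilon_n^2 \asymp n^{-2\alpha/(2\alpha+1)} .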


Similar Articles

On the Minimax Optimality of Block Thresholded Wavelets Estimators for ?-Mixing Process

We propose a wavelet-based regression function estimator for the estimation of the regression function for a sequence of ?-mixing random variables with a common one-dimensional probability density function. Some asymptotic properties of the proposed estimator based on block thresholding are investigated. It is found that the estimators achieve optimal minimax convergence rates over large class...
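To make "block thresholding" concrete, here is a minimal keep-or-kill sketch in Python. The block length L ~ log n and the constant lam are illustrative choices, not taken from this paper, and coeffs stands for an already-computed vector of empirical wavelet coefficients at one resolution level.

import numpy as np

def block_threshold(coeffs, sigma, n, lam=3.0):
    """Keep-or-kill block thresholding: zero out a block of wavelet
    coefficients unless its average energy clears the noise level.
    lam is an illustrative tuning constant, not from the paper."""
    L = max(1, int(np.log(n)))            # common block-length choice: L ~ log n
    out = np.zeros_like(coeffs, dtype=float)
    for start in range(0, len(coeffs), L):
        block = coeffs[start:start + L]
        if np.mean(block ** 2) > lam * sigma ** 2 / n:
            out[start:start + L] = block  # retain the whole block
    return out

Thresholding whole blocks rather than single coefficients pools information across neighbors, which is what lets such estimators attain exact (rather than log-penalized) minimax rates.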


Minimax Rates of Estimation for High-Dimensional Linear Regression Over ℓq-Balls

Consider the standard linear regression model Y = Xβ + w, where Y ∈ R^n is an observation vector, X ∈ R^{n×d} is a design matrix, β ∈ R^d is the unknown regression vector, and w ∼ N(0, σ²I) is additive Gaussian noise. This paper studies the minimax rates of convergence for estimation of β for ℓp-losses and in the ℓ2-prediction loss, assuming that β belongs to an ℓq-ball Bq(Rq) for some q ∈ [0, 1]. We show t...
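As a concrete instance of this setup, the sketch below simulates the q = 0 (exactly sparse) case with illustrative dimensions and fits the Lasso, which is known to attain near-minimax rates under standard design conditions; it is not the specific estimator analyzed in the paper, and none of the numbers below come from it.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, s, sigma = 200, 500, 5, 1.0       # illustrative sizes
X = rng.standard_normal((n, d))
beta = np.zeros(d)
beta[:s] = 1.0                           # beta lies in the l0-ball B_0(s)
Y = X @ beta + sigma * rng.standard_normal(n)

# lambda ~ sigma * sqrt(log(d)/n) matches the minimax scaling
lam = sigma * np.sqrt(np.log(d) / n)
beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, Y).coef_
err = np.sum((beta_hat - beta) ** 2)
print(f"squared l2 error {err:.3f} vs benchmark s*log(d)/n = {s * np.log(d) / n:.3f}")

The printed benchmark s log(d)/n is the order of the minimax squared ℓ2 risk in the sparse case, so the comparison gives a feel for the rate.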


Algorithms, Combinatorics, Information, and Beyond

Shannon information theory aims at finding fundamental limits for storage and communication, including rates of convergence to these limits. Indeed, many interesting information-theoretic phenomena seem to appear in the second-order asymptotics. So we first discuss precise analysis of the minimax redundancy, which can be viewed as a measure of learnable or useful information. Then we highlight Ma...
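The "precise analysis of the minimax redundancy" mentioned here typically refers to second-order expansions of the following form; this is a standard result for smooth d-parameter families, stated for context rather than drawn from this abstract:

    R_n^* = \frac{d}{2}\log\frac{n}{2\pi} + \log\int_{\Theta}\sqrt{\det I(\theta)}\,d\theta + o(1),

where $I(\theta)$ is the Fisher information matrix. The leading $(d/2)\log n$ term is the "useful information" scale, and the constant term is where the finer second-order phenomena live.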



Minimax nonparametric classification - Part I: Rates of convergence

This paper studies minimax aspects of nonparametric classification. We first study minimax estimation of the conditional probability of a class label, given the feature variable. This function, say f, is assumed to be in a general nonparametric class. We show the minimax rate of convergence under squared L2 loss is determined by the massiveness of the class as measured by metric entropy. The se...
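The bridge from estimating f to classification risk, which the truncated sentence presumably develops, is commonly made through the plug-in bound; the following is a standard inequality, included for context rather than quoted from the paper. If \hat f estimates the conditional class probability and the plug-in rule predicts label 1 when \hat f(x) \ge 1/2, then

    R(\hat f) - R^* \le 2\,\mathbb{E}\,|\hat f(X) - f(X)| \le 2\,\|\hat f - f\|_{L_2},

so an L2 estimation rate for f transfers directly to an excess-risk rate for the induced classifier.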



Journal:

Volume   Issue

Pages   -

Publication date: 1995